Unallocated Memory Space in COMA Multiprocessors

Authors

  • Sujat Jamil
  • Gyungho Lee
Abstract

Cache-only memory architecture (COMA) for distributed shared-memory multiprocessors attempts to achieve high utilization of local memory by organizing it as a large cache, called the attraction memory (AM), without a traditional main memory. To facilitate caching of replicated data, it is desirable to leave some of the physical storage space in the AMs unallocated, i.e. not used as part of the physical address space. Without this unallocated space, replacement can cause excessive relocation and migration of memory blocks between the AMs, negating the very purpose of the AM. It is therefore important that the operating system of a COMA machine maintain a certain amount of unallocated memory space to provide good performance. In this paper, we identify an important relation between the amount of unallocated space and the set associativity of the AM, and discuss the trade-off between additional unallocated memory space and higher set associativity.
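To make the role of unallocated space concrete, the following toy Python sketch (not from the paper; the AttractionMemory class, its parameters, and the random access stream are all illustrative assumptions) models a set-associative attraction memory in which some frames are pinned by "owned" (allocated) blocks that cannot simply be discarded, since COMA has no main-memory backing store. It counts how often caching a replicated block locally would force an owned block to be relocated to another node's AM.

```python
# Toy model (hypothetical, not from the paper): a set-associative
# attraction memory whose frames are partly occupied by "owned"
# (allocated) blocks. Owned blocks have no main-memory backing store,
# so displacing one means relocating it to another node's AM.
import random


class AttractionMemory:
    def __init__(self, num_sets, associativity, allocated_fraction):
        self.num_sets = num_sets
        self.assoc = associativity
        owned_per_set = int(associativity * allocated_fraction)
        # Each set starts with `owned_per_set` frames pinned by
        # allocated blocks; the rest are unallocated (free) frames.
        self.sets = [[("owned", None)] * owned_per_set
                     for _ in range(num_sets)]

    def access(self, block_addr):
        """Try to cache a replica of block_addr locally.

        Returns True if that is impossible without displacing an owned
        block, i.e. without relocating it to another AM."""
        target = self.sets[block_addr % self.num_sets]
        if ("shared", block_addr) in target:
            return False                   # replica already present
        if len(target) < self.assoc:
            target.append(("shared", block_addr))
            return False                   # used an unallocated frame
        for i, (kind, _) in enumerate(target):
            if kind == "shared":
                target[i] = ("shared", block_addr)
                return False               # evicted another replica instead
        return True                        # only owned blocks occupy the set


random.seed(0)
for fraction in (1.0, 0.75, 0.5):          # share of the AM that is allocated
    am = AttractionMemory(num_sets=256, associativity=4,
                          allocated_fraction=fraction)
    conflicts = sum(am.access(random.randrange(1 << 20))
                    for _ in range(10_000))
    print(f"allocated fraction {fraction:.2f}: "
          f"{conflicts} accesses would displace an owned block")
```

In this simplified model, a fully allocated AM forces a displacement on essentially every miss, while even one free way per set absorbs the replicas entirely; this is only an illustration, but it hints at the relation the paper studies between the amount of unallocated space and the set associativity of the AM.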

Similar Articles

Bus-Based COMA - Reducing Traffic in Shared-Bus Multiprocessors

A problem with bus-based shared-memory multiprocessors is that the shared bus rapidly becomes a bottleneck in the machine, effectively limiting the machine size to somewhere between ten and twenty processors. We propose a new architecture, the Bus-Based COMA (BB-COMA) that addresses this problem. Compared to the standard UMA architecture, the BB-COMA has lower requirements on bus bandwidth. We ...

Memory Block Relocation in Cache-Only Memory Multiprocessors

… COMA machine is similar to that in a traditional shared memory machine, there are a few aspects that differentiate the AMs from the cache memory in traditional cache-coherent multiprocessors [8]. One important aspect unique to COMA is that the backing store of the AMs in a COMA machine is disks of secondary storage. So, unlike a traditional multiprocessor cache, write-back to the backing store ...

Reducing the Replacement Overhead in Bus-Based COMA Multiprocessors

In a multiprocessor with a Cache-Only Memory Architecture (COMA) all available memory is used to form large cache memories called attraction memories. These large caches help to satisfy shared memory accesses locally, reducing the need for node-external communication. However, since a COMA has no back-up main memory, blocks replaced from one attraction memory must be relocated into another attr...

Scheduling to Reduce Memory Coherence Overhead on Coarse-grain Multiprocessors

Some Distributed Shared Memory (DSM) and Cache-Only Memory Architecture (COMA) multiprocessors keep processes near the data they reference by transparently replicating remote data in the processes' local memories. This automatic replication of data can impose substantial memory system overhead on an application since all replicated data must be kept coherent. We examine the effect of task schedu...

Excel-NUMA: Toward Programmability, Simplicity, and High Performance

While hardware-coherent scalable shared-memory multiprocessors are relatively easy to program, they still require substantial programming effort to deliver high performance. Specifically, to minimize remote accesses, data must be carefully laid out in memory for locality and application working sets carefully tuned for caches. It has been claimed that this programming effort is less necessary ...

Journal:

Volume   Issue

Pages   -

Publication year: 1995